
Conversation

@bharatviswa504 (Contributor) left a comment


LGTM.
Can we add some tests for this: robot tests with ChunkEncodingEnabled turned on, if such an option is exposed in the aws cli, or a UT?
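For illustration only, a rough sketch of what such a UT could look like, assuming the AWS SDK for Java v1; the endpoint, credentials, bucket and key below are placeholders, and `withChunkedEncodingDisabled(false)` keeps the streaming (aws-chunked) encoding on so the gateway has to decode the chunk framing:

```java
// Sketch only, not the actual Ozone test code. It drives a multipart upload
// against the s3 gateway with aws-chunked streaming signing enabled.
// Endpoint, credentials, bucket and key are placeholders.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

import java.io.ByteArrayInputStream;
import java.util.Collections;

public class ChunkedMultipartUploadSketch {

  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "http://localhost:9878", "us-east-1"))   // placeholder s3g endpoint
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("testuser", "secret")))
        .withPathStyleAccessEnabled(true)
        .withChunkedEncodingDisabled(false)           // keep aws-chunked streaming on
        .build();

    byte[] data = new byte[6 * 1024 * 1024];          // 6 MB payload, uploaded as one part

    InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest("bucket1", "key1"));

    UploadPartResult part = s3.uploadPart(new UploadPartRequest()
        .withBucketName("bucket1")
        .withKey("key1")
        .withUploadId(init.getUploadId())
        .withPartNumber(1)
        .withInputStream(new ByteArrayInputStream(data))
        .withPartSize(data.length));

    s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        "bucket1", "key1", init.getUploadId(),
        Collections.singletonList(part.getPartETag())));

    // A real test would now read the key back (or inspect the block file on
    // the datanode) and assert that exactly data.length bytes were stored,
    // with no chunk-signature framing bytes.
  }
}
```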

@ChenSammi (Contributor Author)

> LGTM.
> Can we add some tests for this: robot tests with ChunkEncodingEnabled turned on, if such an option is exposed in the aws cli, or a UT?

Thanks @bharatviswa504, I will add the robot test in the next patch.


@elek (Member) left a comment


+1 LGTM

Thanks for the debugging and the fix, @ChenSammi.

I agree with @bharatviswa504: without a unit or acceptance test this is hard to verify.

Based on the description of the Jira, I created https://issues.apache.org/jira/browse/HDDS-3866 to test it.

I can confirm that this patch fixes the problem. Without the patch the block file on the datanode was 6004226 bytes instead of the expected 6000000:

bash-4.2$ ls -la ./hdds/hdds/50b91002-a790-4532-983c-7e220c5df426/current/containerDir0/1/chunks/104399386074808324.block
-rw-r--r-- 1 hadoop users 6004226 Jun 24 14:16 ./hdds/hdds/50b91002-a790-4532-983c-7e220c5df426/current/containerDir0/1/chunks/104399386074808324.block
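The 4226 extra bytes are consistent with the aws-chunked framing being written into the block verbatim: with a 128 KiB chunk size, 6000000 bytes arrive as 46 chunks, each prefixed with roughly 90 bytes of `<hex-size>;chunk-signature=<hex>` framing plus CRLFs, followed by an ~86-byte terminating zero-length chunk (46 × 90 + 86 ≈ 4226). As an illustration of that framing only (not the code from this patch), a minimal decoder that keeps just the payload bytes could look like this:

```java
// Illustrative sketch only, not the code from this patch: a minimal decoder
// for the aws-chunked framing sent by clients with streaming signing enabled.
// Each chunk is prefixed with "<hex-size>;chunk-signature=<hex>\r\n" and
// followed by "\r\n"; only the payload bytes in between should be stored.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class AwsChunkedDecoder {

  /** Returns only the payload bytes, dropping chunk headers and signatures. */
  public static byte[] decode(InputStream in) throws IOException {
    ByteArrayOutputStream payload = new ByteArrayOutputStream();
    while (true) {
      String header = readLine(in);           // e.g. "20000;chunk-signature=ab12..."
      int semicolon = header.indexOf(';');
      int size = Integer.parseInt(
          semicolon >= 0 ? header.substring(0, semicolon) : header, 16);
      if (size == 0) {                         // final "0;chunk-signature=..." chunk
        break;
      }
      byte[] chunk = new byte[size];
      int off = 0;
      while (off < size) {                     // read exactly <size> payload bytes
        int n = in.read(chunk, off, size - off);
        if (n < 0) {
          throw new IOException("Unexpected end of aws-chunked stream");
        }
        off += n;
      }
      payload.write(chunk);
      readLine(in);                            // consume the trailing CRLF
    }
    return payload.toByteArray();
  }

  // Reads a CRLF-terminated line as ASCII, without the line terminator.
  private static String readLine(InputStream in) throws IOException {
    StringBuilder sb = new StringBuilder();
    int c;
    while ((c = in.read()) != -1 && c != '\n') {
      if (c != '\r') {
        sb.append((char) c);
      }
    }
    return sb.toString();
  }
}
```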

As this seems to be a very big problem, and it is covered by the test in HDDS-3866, I will merge it without a unit test. After merging HDDS-3866 we should add the new freon test to the acceptance test suite...

@elek changed the title from "HDDS-3512. s3g multi-upload saved content incorrect when client uses …" to "HDDS-3512. s3g multi-part-upload saved incorrect content using streaming" on Jun 24, 2020
@elek elek merged commit fb3902f into apache:master Jun 24, 2020
errose28 added a commit to errose28/ozone that referenced this pull request Jun 25, 2020
* upstream/master: (56 commits)
  HDDS-3264. Fix TestCSMMetrics.java. (apache#1120)
  HDDS-3858. Remove support to start Ozone and HDFS datanodes in the same JVM (apache#1117)
  HDDS-3704. Update all the documentation to use ozonefs-hadoop2/3 instead of legacy/current (apache#1099)
  HDDS-3773. Add OMDBDefinition to define structure of om.db. (apache#1076)
  Revert "HDDS-3263. Fix TestCloseContainerByPipeline.java. (apache#1119)" (apache#1126)
  HDDS-3821. Disable Ozone SPNEGO should not fall back to hadoop.http.a… (apache#1101)
  HDDS-3819. OzoneManager#listVolumeByUser ignores userName parameter when ACL is enabled (apache#1087)
  HDDS-3779. Add csi interface documents to show how to use ozone csi (apache#1059)
  HDDS-3857. Datanode in compose/ozonescripts can't be started (apache#1116)
  HDDS-3430. Enable TestWatchForCommit test cases. (apache#1114)
  HDDS-3263. Fix TestCloseContainerByPipeline.java. (apache#1119)
  HDDS-3512. s3g multi-part-upload saved incorrect content using streaming (apache#1092)
  HDDS-3836. Modify ContainerPlacementPolicyFactory JavaDoc (apache#1097)
  HDDS-3780. Replace the imagePullPolicy from always to IfNotPresent (apache#1055)
  HDDS-3847. Change OMNotLeaderException logging to DEBUG (apache#1118)
  HDDS-3745. Improve OM and SCM performance with 64% by avoid collect datanode information to s3g (apache#1031)
  HDDS-3286. BasicOzoneFileSystem  support batchDelete. (apache#814)
  HDDS-3850. Update the admin document to let user know how to show the status of all rules. (apache#1109)
  HDDS-3848. Add ratis.thirdparty.version in main pom.xml (apache#1108)
  HDDS-3815. Avoid buffer copy in ContainerCommandRequestProto. (apache#1085)
  ...
@ChenSammi (Contributor Author)

Thanks @bharatviswa504 and @elek for the review and verification.
